Abstract:
The ${\mathbb{Z}_2}{\mathbb{Z}_4}{\mathbb{Z}_8}$-additive codes are subgroups of $\mathbb{Z}_2^{{\alpha _1}} \times \mathbb{Z}_4^{{\alpha _2}} \times \mathbb{Z}_8^{{\alpha _3}}$. A ${\mathbb{Z}_2}{\mathbb{Z}_4}{\mathbb{Z}_8}$-linear Hadamard code is a Hadamard code which is the Gray map image of a ${\mathbb{Z}_2}{\mathbb{Z}_4}{\mathbb{Z}_8}$-additive code. In this paper, we generalize some known results for ${\mathbb{Z}_2}{\mathbb{Z}_4}$-linear Hadamard codes to ${\mathbb{Z}_2}{\mathbb{Z}_4}{\mathbb{Z}_8}$-linear Hadamard codes with ${\alpha _1} \ne 0$, ${\alpha _2} \ne 0$, and ${\alpha _3} \ne 0$. First, we give a recursive construction of ${\mathbb{Z}_2}{\mathbb{Z}_4}{\mathbb{Z}_8}$-additive Hadamard codes of type $\left( {{\alpha _1},{\alpha _2},{\alpha _3};{t_1},{t_2},{t_3}} \right)$ with ${t_1} \geq 1$, ${t_2} \geq 0$, and ${t_3} \geq 1$. Then, we show for which types the corresponding ${\mathbb{Z}_2}{\mathbb{Z}_4}{\mathbb{Z}_8}$-linear Hadamard codes are nonlinear over ${\mathbb{Z}_2}$. Moreover, we show that, unlike ${\mathbb{Z}_2}{\mathbb{Z}_4}$-linear Hadamard codes, in general, this family of ${\mathbb{Z}_2}{\mathbb{Z}_4}{\mathbb{Z}_8}$-linear Hadamard codes does not include the family of ${\mathbb{Z}_4}$-linear or ${\mathbb{Z}_8}$-linear Hadamard codes. Actually, we show that, for example, for length ${2^{11}}$, the constructed nonlinear ${\mathbb{Z}_2}{\mathbb{Z}_4}{\mathbb{Z}_8}$-linear Hadamard codes are not equivalent to each other, nor to any ${\mathbb{Z}_2}{\mathbb{Z}_4}$-linear Hadamard code, nor to any previously constructed ${\mathbb{Z}_{{2^s}}}$-linear Hadamard code, with $s \geq 2$.
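The Gray map underlying these codes can be illustrated concretely. The sketch below is a hedged implementation of one common form of the generalized Gray map $\varphi: \mathbb{Z}_{2^s} \to \mathbb{Z}_2^{2^{s-1}}$ (the exact map used in the paper may differ in conventions); for $s = 2$ it reduces to the classical $\mathbb{Z}_4$ Gray map $0 \mapsto 00$, $1 \mapsto 01$, $2 \mapsto 11$, $3 \mapsto 10$:

```python
# Hedged sketch of a generalized Gray map phi: Z_{2^s} -> Z_2^{2^(s-1)}.
# Writing u = u_0 + 2 u_1 + ... + 2^(s-1) u_{s-1}, the image is
# u_{s-1}(1,...,1) + (u_0,...,u_{s-2}) Y, where the columns of Y range
# over all binary vectors of length s-1.
from itertools import product

def gray_map(u, s):
    """Map u in Z_{2^s} to a binary vector of length 2^(s-1)."""
    bits = [(u >> i) & 1 for i in range(s)]   # u = sum(bits[i] * 2**i)
    u_low, u_top = bits[:s - 1], bits[s - 1]
    cols = list(product((0, 1), repeat=s - 1))  # columns of Y
    return tuple((u_top + sum(a * b for a, b in zip(u_low, c))) % 2
                 for c in cols)

# s = 2 recovers the classical Z_4 Gray map.
assert [gray_map(u, 2) for u in range(4)] == [(0, 0), (0, 1), (1, 1), (1, 0)]
# s = 3: Z_8 maps injectively into Z_2^4, with 4 -> (1, 1, 1, 1).
assert gray_map(4, 3) == (1, 1, 1, 1)
assert len({gray_map(u, 3) for u in range(8)}) == 8
```

Applying the appropriate map coordinatewise to the three blocks of a ${\mathbb{Z}_2}{\mathbb{Z}_4}{\mathbb{Z}_8}$-additive code yields its binary (generally nonlinear) image.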
Abstract:
Cyclic AN codes are considered to be suitable for error detection and error correction in arithmetic operations. Cyclic AN codes using radix $2^k$ expressions are called $2^k$-ary cyclic AN codes. The paper is concerned with $2^k$-ary cyclic AN codes for burst error correction. After describing the structure of these codes and arithmetic burst errors, we present some necessary conditions for $2^k$-ary cyclic AN codes to have any burst error correcting ability and describe the relation between the burst error correcting ability of the binary cyclic AN code and that of the $2^k$-ary cyclic AN code.
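The basic AN-code mechanism behind this family is simple to demonstrate. In the illustrative sketch below ($A = 3$ is an arbitrary example modulus, not taken from the paper), an integer $N$ is encoded as $A \cdot N$, and a received word is valid iff it is divisible by $A$; a single arithmetic error adds $\pm 2^i$, which is never divisible by 3, so every such error is detected:

```python
# Toy AN-code error detection (A = 3 is an illustrative choice).
# Encode N as A*N; check validity by divisibility. A single arithmetic
# error changes the value by +/- 2^i, and 3 never divides 2^i.
A = 3

def encode(n):
    return A * n

def is_valid(r):
    return r % A == 0

codeword = encode(13)                 # 39
assert is_valid(codeword)
for i in range(16):                   # every single arithmetic error of weight one
    assert not is_valid(codeword + 2 ** i)
    assert not is_valid(codeword - 2 ** i)
```

Burst error correction, the subject of the paper, requires larger generators $A$ chosen so that distinct correctable bursts fall in distinct residue classes modulo $A$.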
Abstract:
Consider $M$-valued ($M \geq 3$) classification systems realized by combining $N$ ($N \geq \lceil \log_{2} M \rceil$) binary classifiers. Such a construction method is called an Error Correcting Output Code (ECOC). First, focusing on a Reed-Muller (RM) code, we derive a modified RM (mRM) code to make it suitable for the ECOC. Using the mRM code and the Hadamard matrix, we introduce a simplex code, which is one of the powerful equidistant codes. Next, from the viewpoint of a system evaluation model, we evaluate the ECOC built with the constructive coding described above. We show, by employing analytical formulas and experiments, that it has desirable properties such as Flexible, Elastic, and Effective Elastic as $M$ becomes large.
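The ECOC decoding step can be sketched in a few lines. The code matrix below is an arbitrary 4-class equidistant example for illustration, not the mRM/simplex construction from the paper: each class gets an $N$-bit codeword, the $N$ binary classifiers each predict one bit, and the decoder picks the class whose codeword is nearest in Hamming distance:

```python
# Minimal ECOC decoding sketch with a toy equidistant code matrix
# (M = 4 classes, N = 6 binary classifiers, pairwise distance 4).
CODE = {
    0: (0, 0, 0, 0, 0, 0),
    1: (0, 1, 1, 0, 1, 1),
    2: (1, 0, 1, 1, 0, 1),
    3: (1, 1, 0, 1, 1, 0),
}

def hamming(a, b):
    return sum(x != y for x, y in zip(a, b))

def decode(bits):
    """Return the class whose codeword is nearest to the N predictions."""
    return min(CODE, key=lambda cls: hamming(CODE[cls], bits))

# Minimum distance 4 means a single misclassifying binary classifier
# is corrected by nearest-codeword decoding.
assert decode((0, 1, 1, 0, 1, 1)) == 1
assert decode((0, 1, 1, 0, 1, 0)) == 1   # one classifier wrong, still class 1
```

Larger minimum distance in the code matrix buys tolerance to more misbehaving binary classifiers, which is what the equidistant (simplex) construction optimizes.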
Abstract:
Most existing hashing approaches impose artificial constraints (e.g., uncorrelated and balanced) on hash functions to learn high-quality binary codes, and utilize an optimization strategy which is typically compatible with these hash functions. However, these tight constraints potentially restrict the flexibility of hash functions to fit training data, and result in a complicated optimization problem. In this paper, we propose a learning-based hashing method called 'deep supervised hashing with target code' (DSHT) to distill the desirable properties of the target coding into hash functions to generate high-quality binary codes. Meanwhile, we incorporate intra-class disparity learning into our proposed method for better generalization. Benefiting from recent advances in deep learning, our framework constructs hash functions as a latent hashing layer in a deep neural network, in which binary hashing representations are learned under the guidance of the target code and semantic information. Experiments on two large-scale image datasets (MNIST, CIFAR-10) demonstrate that the proposed framework is effective and flexible and shows comparable performance against other state-of-the-art hashing methods.
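The retrieval step that such learned binary codes enable can be sketched without the training machinery (the DSHT network itself needs a deep-learning stack and is omitted; the codes below are toy values, not learned ones): database items and a query are each reduced to short binary codes, and retrieval ranks the database by Hamming distance, computed cheaply with XOR and popcount on packed integers:

```python
# Hedged sketch of binary-hash retrieval with toy 8-bit codes.
database = {                   # item id -> 8-bit hash code (illustrative)
    "img0": 0b10110100,
    "img1": 0b10110110,
    "img2": 0b01001011,
}

def hamming(a, b):
    """Hamming distance between two packed binary codes."""
    return bin(a ^ b).count("1")

def rank(query):
    """Database ids sorted by Hamming distance to the query code."""
    return sorted(database, key=lambda k: hamming(database[k], query))

assert rank(0b10110101)[0] == "img0"   # nearest code: only 1 bit differs
```

This constant-time-per-comparison distance is why high-quality binary codes matter: retrieval quality then depends entirely on how well the learned hash functions preserve semantic similarity.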
Abstract:
We review the recently introduced soft-aided bit-marking (SABM) algorithm and its suitability for product codes. Some aspects of the implementation of the SABM algorithm are discussed. The influence of suboptimal channel soft information is also analyzed.
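The product-code structure the SABM algorithm targets can be illustrated with a toy example (single-parity-check component codes here, not the stronger component codes used in practice): data bits fill a $k \times k$ array, a parity bit is appended to every row and column, and a single flipped bit is located by the unique failing row check and column check:

```python
# Toy product code: single-parity-check rows and columns.
def encode(data):
    """data: k rows of k bits -> (k+1) x (k+1) codeword array."""
    rows = [r + [sum(r) % 2] for r in data]               # row parity
    cols = [sum(r[j] for r in rows) % 2 for j in range(len(rows[0]))]
    return rows + [cols]                                  # column parity row

def correct_single_error(word):
    """Flip the bit at the intersection of the failing row and column."""
    bad_rows = [i for i, r in enumerate(word) if sum(r) % 2]
    bad_cols = [j for j in range(len(word[0]))
                if sum(r[j] for r in word) % 2]
    if len(bad_rows) == 1 and len(bad_cols) == 1:
        word[bad_rows[0]][bad_cols[0]] ^= 1
    return word

code = encode([[1, 0], [1, 1]])
code[0][1] ^= 1                        # inject a single bit error
assert correct_single_error(code) == encode([[1, 0], [1, 1]])
```

Iterative hard decoding of the rows and columns is the baseline the SABM algorithm improves on, by using channel soft information to mark unreliable bits and resolve miscorrections.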
Abstract:
Quantum computing (QC) is at the cusp of a revolution. Machines with 100 quantum bits (qubits) are anticipated to be operational by 2020 [30, 73], and several-hundred-qubit machines are around the corner. Machines of this scale have the capacity to demonstrate quantum supremacy, the tipping point where QC is faster than the fastest classical alternative for a particular problem. Because error correction techniques will be central to QC and will be the most expensive component of quantum computation, choosing the lowest-overhead error correction scheme is critical to overall QC success. This paper evaluates two established quantum error correction codes (planar and double-defect surface codes) using a set of compilation, scheduling, and network simulation tools. In considering scalable methods for optimizing both codes, we do so in the context of a full microarchitectural and compiler analysis. Contrary to previous predictions, we find that the simpler planar codes are sometimes more favorable for implementation on superconducting quantum computers, especially under conditions of high communication congestion.